A Broader Impacts
Our approach uses MIM to enhance the adversarial robustness of downstream models. We emphasize that this paper focuses specifically on the adversarial robustness of ViTs, and we show that our method provides an effective defense against severe adversarial attacks. We propose two hypotheses to explain the method's effectiveness. Figure 3 (a) compares the results when the injected noise is known versus unknown to the attacker: when the attacker can access the noise, our model's robust accuracy does not improve much. Taken together, the results indicate that both proposed hypotheses hold.
- Information Technology > Security & Privacy (0.38)
- Government > Military (0.38)
- Oceania > New Zealand > North Island > Auckland Region > Auckland (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
Privacy Assessment on Reconstructed Images: Are Existing Evaluation Metrics Faithful to Human Perception?
Hand-crafted image quality metrics, such as PSNR and SSIM, are commonly used to evaluate a model's privacy risk under reconstruction attacks. Under these metrics, reconstructed images judged to resemble the original generally indicate more privacy leakage, while images judged overall dissimilar indicate stronger robustness to the attack. However, there is no guarantee that these metrics track human opinions, which are what ultimately provide a trustworthy judgment of model privacy leakage. In this paper, we comprehensively study the faithfulness of these hand-crafted metrics to human perception of private information in reconstructed images.
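The metrics named in the abstract are straightforward to reproduce. The sketch below is our own illustration, not the paper's evaluation code: it computes PSNR and a single-window simplification of SSIM between a stand-in image and a noisy "reconstruction" (standard SSIM averages the same statistic over local windows rather than computing it globally).

```python
import numpy as np

def psnr(original, reconstructed, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    mse = np.mean((original - reconstructed) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(x, y, max_val=1.0):
    """Single-window (global) SSIM; the standard metric averages this over local windows."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((32, 32))                                          # stand-in "private" image
noisy = np.clip(img + 0.05 * rng.standard_normal(img.shape), 0, 1)  # imperfect reconstruction

print(psnr(img, noisy))         # moderate PSNR: partial leakage under this metric
print(global_ssim(img, noisy))
```

Under these scores, a reconstruction close to the original (high PSNR, SSIM near 1) would be read as strong privacy leakage; the paper's question is whether such readings agree with human judgment.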
Shadow defense against gradient inversion attack in federated learning
Jiang, Le, Ma, Liyan, Yang, Guang
Federated learning (FL) has emerged as a transformative framework for privacy-preserving distributed training, allowing clients to collaboratively train a global model without sharing their local data. This is especially crucial in sensitive fields like healthcare, where protecting patient data is paramount. However, privacy leakage remains a critical challenge, as the communication of model updates can be exploited by potential adversaries. Gradient inversion attacks (GIAs), for instance, allow adversaries to approximate the gradients used for training and reconstruct training images, thus stealing patient privacy. Existing defense mechanisms obscure gradients, yet lack a nuanced understanding of which gradients or types of image information are most vulnerable to such attacks. Such indiscriminately calibrated perturbations result in either excessive privacy protection that degrades model accuracy, or insufficient protection that fails to safeguard sensitive information. We therefore introduce a framework that addresses these challenges by leveraging an interpretable shadow model to identify sensitive areas, enabling more targeted, sample-specific noise injection. Specifically, our defensive strategy achieves discrepancies of 3.73 in PSNR and 0.2 in SSIM relative to the no-defense setting on the ChestXRay dataset, and 2.78 in PSNR and 0.166 in SSIM on the EyePACS dataset. Moreover, it minimizes adverse effects on model performance, with less than 1\% F1 reduction compared to SOTA methods. Our extensive experiments, conducted across diverse types of medical images, validate the generalization of the proposed framework. The defense improvements for FedAvg are consistently over 1.5\% in LPIPS and SSIM. It also offers a universal defense against various GIA types, especially for the sensitive areas in images.
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
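The abstract's idea of sample-specific, targeted noise injection can be illustrated with a toy sketch. The code below is our own illustration, not the paper's method: it assumes a per-element sensitivity map in [0, 1] (which the paper would derive from its interpretable shadow model) is already given, and scales Gaussian noise on a gradient elementwise so that sensitive regions are perturbed far more than the rest.

```python
import numpy as np

def mask_weighted_noise(grad, sensitivity, base_std=0.01, max_std=0.1, rng=None):
    """Add Gaussian noise to a gradient, scaled per element by a sensitivity map in [0, 1].

    Elements flagged as privacy-sensitive get up to `max_std` noise; non-sensitive
    elements get only `base_std`, limiting the hit to model utility.
    """
    rng = rng or np.random.default_rng()
    std = base_std + (max_std - base_std) * sensitivity   # elementwise noise scale
    return grad + std * rng.standard_normal(grad.shape)

rng = np.random.default_rng(42)
grad = rng.standard_normal((8, 8))
sens = np.zeros((8, 8))
sens[:4, :4] = 1.0                                        # top-left quadrant marked sensitive

noisy_grad = mask_weighted_noise(grad, sens, rng=rng)
err = np.abs(noisy_grad - grad)
print(err[:4, :4].mean(), err[4:, 4:].mean())             # perturbation concentrates where sens is high
```

The hypothetical `mask_weighted_noise` helper and the hard-coded quadrant mask are ours; the point is only that perturbation budget follows the sensitivity map instead of being spread uniformly over all gradients.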
Learning Single Index Models with Diffusion Priors
Tang, Anqi, Chen, Youming, Xue, Shuchen, Liu, Zhaoqiang
Diffusion models (DMs) have demonstrated remarkable ability to generate diverse and high-quality images by efficiently modeling complex data distributions. They have also been explored as powerful generative priors for signal recovery, resulting in a substantial improvement in the quality of reconstructed signals. However, existing research on signal recovery with diffusion models either focuses on specific reconstruction problems or is unable to handle nonlinear measurement models with discontinuous or unknown link functions. In this work, we focus on using DMs to achieve accurate recovery from semi-parametric single index models, which encompass a variety of popular nonlinear models that may have {\em discontinuous} and {\em unknown} link functions. We propose an efficient reconstruction method that requires only one round of unconditional sampling and (partial) inversion of DMs. Theoretical analysis of the effectiveness of the proposed method is established under appropriate conditions. We perform numerical experiments on image datasets for different nonlinear measurement models, and observe that, compared to competing methods, our approach yields more accurate reconstructions while utilizing significantly fewer neural function evaluations.
- North America > Canada (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
- Asia > China (0.04)
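A single index model observes y = f(⟨a, x⟩) with an unknown, possibly discontinuous link f. As an illustration of the model class only (this is a classical correlation-style estimator, not the paper's diffusion-prior method), the sketch below uses f = sign, i.e. one-bit measurements: for Gaussian measurement vectors, (1/m) Aᵀy is proportional to x in expectation, so it recovers the direction of x even though the link is discontinuous and its scale is unidentifiable.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 100, 20000
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                 # unit-norm ground-truth signal

A = rng.standard_normal((m, n))        # Gaussian measurement matrix
y = np.sign(A @ x)                     # discontinuous, "unknown" link f = sign

# Classical estimator: x_hat ∝ (1/m) A^T y recovers the direction of x for
# any link with E[f(g) g] != 0 (here E[|g|] = sqrt(2/pi)); scale is lost.
x_hat = A.T @ y / m
x_hat /= np.linalg.norm(x_hat)

print(abs(x_hat @ x))                  # cosine similarity with the true direction
```

Methods like the paper's replace this simple averaging with a learned diffusion prior, which is what allows accurate reconstruction of structured signals such as images from far fewer measurements.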
LAMA: Stable Dual-Domain Deep Reconstruction For Sparse-View CT
Ding, Chi, Zhang, Qingchao, Wang, Ge, Ye, Xiaojing, Chen, Yunmei
Inverse problems arise in many applications, especially tomographic imaging. We develop a Learned Alternating Minimization Algorithm (LAMA) to solve such problems via two-block optimization by synergizing data-driven and classical techniques with proven convergence. LAMA is naturally induced by a variational model with learnable regularizers in both data and image domains, parameterized as composite functions of neural networks trained with domain-specific data. We allow these regularizers to be nonconvex and nonsmooth to extract features from data effectively. We minimize the overall objective function using Nesterov's smoothing technique and residual learning architecture. It is demonstrated that LAMA reduces network complexity, improves memory efficiency, and enhances reconstruction accuracy, stability, and interpretability. Extensive experiments show that LAMA significantly outperforms state-of-the-art methods on popular benchmark datasets for Computed Tomography.
- North America > United States > Florida > Alachua County > Gainesville (0.14)
- Europe > Switzerland (0.04)
- North America > United States > New York > Rensselaer County > Troy (0.04)
- (4 more...)
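The two-block alternating structure behind LAMA can be sketched with a classical stand-in. The code below is our simplification, not LAMA itself: a fixed ℓ1 penalty replaces the learned, nonconvex regularizers, giving a half-quadratic splitting that alternates a closed-form least-squares x-step with a soft-thresholding z-step.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def alternating_minimization(A, b, lam=0.01, mu=1.0, iters=200):
    """Two-block scheme for min_{x,z} 0.5||Ax-b||^2 + (mu/2)||x-z||^2 + lam||z||_1.

    x-step: linear least squares (closed form); z-step: soft thresholding.
    A fixed l1 regularizer stands in for LAMA's learned, nonconvex ones.
    """
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    AtA = A.T @ A + mu * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + mu * z)   # x-block update
        z = soft_threshold(x, lam / mu)          # z-block update
    return x, z

rng = np.random.default_rng(3)
n, m, k = 50, 100, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true                                   # noiseless measurements

x_rec, z_rec = alternating_minimization(A, b)
print(np.linalg.norm(z_rec - x_true) / np.linalg.norm(x_true))   # small relative error
```

LAMA's contribution is to parameterize both regularizers as trained neural networks (in the data and image domains) while keeping convergence guarantees for exactly this kind of alternating scheme.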